QteaTensor via torch
- qredtea.torchapi.default_pytorch_backend(device='cpu', dtype=torch.complex128)[source]
Generate a default tensor backend for dense tensors, i.e., with a QteaTorchTensor.
Arguments
- dtype : data type, optional
Data type for pytorch. Default to torch.complex128.
- device : device specification, optional
Default to “cpu”. Available: “cpu”, “gpu”, “xla”.
Returns
tensor_backend : instance of TensorBackend
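A minimal usage sketch with the documented defaults (variable names are illustrative):
# Create a dense torch backend on the CPU with complex128 precision.
import torch
from qredtea.torchapi import default_pytorch_backend

tensor_backend = default_pytorch_backend(device="cpu", dtype=torch.complex128)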
- qredtea.torchapi.default_abelian_pytorch_backend(device='cpu', dtype=torch.complex128)[source]
Generate a default tensor backend for symmetric tensors, i.e., with a QteaTorchTensor. The tensors support Abelian symmetries.
Arguments
- dtype : data type, optional
Data type for pytorch. Default to torch.complex128.
- device : device specification, optional
Default to “cpu”. Available: “cpu”, “gpu”, “xla”.
Returns
tensor_backend : instance of TensorBackend
Tensor class based on pytorch; pytorch supports both CPU and GPU in one framework.
- class qredtea.torchapi.QteaTorchTensor(links, ctrl='Z', are_links_outgoing=None, base_tensor_cls=None, dtype=torch.complex128, device=None)[source]
Tensor for Quantum TEA based on the pytorch tensors.
- add_update(other, factor_this=None, factor_other=None)[source]
Inplace addition as self = factor_this * self + factor_other * other.
Arguments
- other : same instance as self
Will be added to self. Unmodified on exit.
- factor_this : scalar
Scalar weight for tensor self.
- factor_other : scalar
Scalar weight for tensor other.
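A short sketch of the inplace addition, building the tensors via from_elem_array (documented further below):
import torch
from qredtea.torchapi import QteaTorchTensor

tens_a = QteaTorchTensor.from_elem_array(torch.ones(2, 3, dtype=torch.complex128))
tens_b = QteaTorchTensor.from_elem_array(torch.ones(2, 3, dtype=torch.complex128))
# Inplace update: tens_a = 2.0 * tens_a + 0.5 * tens_b
tens_a.add_update(tens_b, factor_this=2.0, factor_other=0.5)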
- property are_links_outgoing
Whether the links are outgoing, as a property (always False for dense tensors).
- attach_dummy_link(position, is_outgoing=True)[source]
Attach dummy link at given position (inplace update).
- property base_tensor_cls
Base tensor class.
- convert(dtype=None, device=None, stream=None)[source]
Convert underlying array to the specified data type inplace.
- static convert_operator_dict(op_dict, params=None, symmetries=None, generators=None, base_tensor_cls=None, dtype=torch.complex128, device='cpu')[source]
Iterate through an operator dict and convert the entries. Converts as well to rank-4 tensors.
Arguments
- op_dict : instance of TNOperators
Contains the operators as xp.ndarray.
- params : dict, optional
To resolve operators being passed as callable.
- symmetries : list, optional, for compatibility with symmetric tensors.
Must be an empty list.
- generators : list, optional, for compatibility with symmetric tensors.
Must be an empty list.
- base_tensor_cls : None, optional, for compatibility with symmetric tensors.
No checks on this one here.
- dtype : data type, optional
Specify data type. Default to torch.complex128.
- device : str
Device for the simulation. Available: “cpu” and “gpu”. Default to “cpu”.
Details
The conversion to rank-4 tensors is useful for future implementations, either to support adding interactions with a bond dimension greater than one between them or for symmetries. We add dummy links of dimension one. The order is (dummy link to the left, old link-1, old link-2, dummy link to the right).
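A minimal sketch of the conversion; it assumes that TNOperators from qtealeaves accepts numpy arrays via dict-style assignment, which is not checked here:
import numpy as np
import torch
from qtealeaves.operators import TNOperators
from qredtea.torchapi import QteaTorchTensor

ops = TNOperators()
ops["sz"] = 0.5 * np.diag([1.0, -1.0])   # assumed dict-style interface
ops["id"] = np.eye(2)

# Convert all entries to rank-4 QteaTorchTensor operators on the CPU.
torch_ops = QteaTorchTensor.convert_operator_dict(
    ops, dtype=torch.complex128, device="cpu"
)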
- property device
Device where the tensor is stored.
- property dtype
Data type of the underlying arrays.
- property dtype_eps
Data type’s machine precision.
- static dummy_link(example_link)[source]
Return a dummy link which is always an int for base tensors.
- eig_api(matvec_func, links, conv_params, args_func=None, kwargs_func=None)[source]
Interface to hermitian eigenproblem
Arguments
- matvec_func : callable
Multiplies “matrix” with “vector”.
- links : links according to QteaTensor
Contain the dimension of the problem.
- conv_params : instance of TNConvergenceParameters
Settings for the eigenproblem with the Arnoldi method.
args_func : arguments for matvec_func
kwargs_func : keyword arguments for matvec_func
Returns
eigenvalues : scalar
eigenvectors : instance of QteaTensor
- eig_api_arpack(matvec_func, links, conv_params, args_func=None, kwargs_func=None)[source]
Interface to hermitian eigenproblem via Arpack. Arguments see eig_api. Possible implementation is https://github.com/rfeinman/Torch-ARPACK.
- eig_api_qtea(matvec_func, conv_params, args_func=None, kwargs_func=None)[source]
Interface to hermitian eigenproblem via qtealeaves.solvers. Arguments see eig_api.
- property elem
Elements of the tensor.
- elementwise_abs_smaller_than(value)[source]
Return a boolean indicating whether the absolute value of each tensor element is smaller than value.
- eye_like(link)[source]
Generate identity matrix.
Arguments
- self : instance of QteaTensor
Extract data type etc. from this one here.
- link : same as returned by links property, here an integer.
Dimension of the square identity matrix.
- static free_memory_device()[source]
Free the unused device memory that is otherwise occupied by the cache. Otherwise cupy will keep the memory occupied for caching reasons. We follow the approach from https://stackoverflow.com/questions/70508960
- classmethod from_elem_array(tensor, dtype=None, device=None)[source]
New QteaTorchTensor from an array.
Arguments
- tensor : torch.Tensor
Array for the new tensor.
- dtype : data type, optional
Allows specifying the data type. If not None, the array will be converted. Default to None.
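A minimal sketch wrapping an existing torch array:
import torch
from qredtea.torchapi import QteaTorchTensor

array = torch.rand(2, 2, dtype=torch.float64)
# Convert to complex128 on the CPU while wrapping the array.
tens = QteaTorchTensor.from_elem_array(array, dtype=torch.complex128, device="cpu")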
- classmethod from_qteatensor(qteatensor, dtype=None, device=None)[source]
Convert QteaTensor based on numpy/cupy into QteaTorchTensor.
- fuse_links_update(fuse_low, fuse_high, is_link_outgoing=True)[source]
Fuses one set of links to a single link (inplace-update).
Parameters
- fuse_low : int
First index to fuse.
- fuse_high : int
Last index to fuse.
Example: to fuse links 1, 2, and 3, use fuse_low=1 and fuse_high=3. The function therefore requires the links to be already sorted in the correct order.
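A minimal sketch; the expected shape in the comment follows from multiplying the fused dimensions:
import torch
from qredtea.torchapi import QteaTorchTensor

tens = QteaTorchTensor.from_elem_array(torch.zeros(2, 3, 4, 5, dtype=torch.complex128))
# Fuse legs 1 and 2 into a single leg of dimension 3 * 4 = 12.
tens.fuse_links_update(1, 2)
print(tens.shape)  # expected (2, 12, 5)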
- get_diag_entries_as_int()[source]
Return diagonal entries of rank-2 tensor as integer on host and as numpy.
- get_submatrix(row_range, col_range)[source]
Extract a submatrix of a rank-2 tensor for the given rows / cols.
- property has_symmetry
Boolean flag if tensor encodes symmetries.
- kron(other, idxs=None)[source]
Perform the kronecker product between two tensors. By default, do it over all the legs, but you can also specify which legs should be kroned over. The legs over which the kron is not done should have the same dimension.
Parameters
- other : QteaTensor
Tensor to kron with self.
- idxs : Tuple[int], optional
Indices over which to perform the kron. If None, kron over all indices. Default to None.
Returns
- QteaTensor
The kronned tensor.
Details
Performing the kronecker product between a tensor of shape (2, 3, 4) and a tensor of shape (1, 2, 3) will result in a tensor of shape (2, 6, 12).
To perform the normal kronecker product between matrices, just pass rank-2 tensors.
To perform the kronecker product between vectors, first transform them into rank-2 tensors of shape (1, -1).
Performing the kronecker product only along some legs means that, along the legs not in idxs, it is an elementwise product and not a kronecker product. For example, if idxs=(0, 2) for tensors of shapes (2, 3, 4) and (1, 3, 2), the output will be of shape (2, 3, 8).
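A short sketch reproducing the two cases above:
import torch
from qredtea.torchapi import QteaTorchTensor

t_a = QteaTorchTensor.from_elem_array(torch.ones(2, 3, 4, dtype=torch.complex128))
t_b = QteaTorchTensor.from_elem_array(torch.ones(1, 2, 3, dtype=torch.complex128))
t_full = t_a.kron(t_b)                  # expected shape (2, 6, 12)

t_c = QteaTorchTensor.from_elem_array(torch.ones(1, 3, 2, dtype=torch.complex128))
t_part = t_a.kron(t_c, idxs=(0, 2))     # expected shape (2, 3, 8); leg 1 elementwise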
- property linear_algebra_library
Specification of the linear algebra library used, as the string “torch”.
- property links
Here, the links are simply the dimension of the tensor along each leg (same as shape).
- mask_to_device(mask)[source]
Send a mask to the device where the tensor is (right now only CPU -> GPU and CPU -> CPU).
- mask_to_host(mask)[source]
Send a mask to the host where we need it for symmetric tensors, e.g., degeneracies. Return as numpy.
- classmethod mpi_recv(from_, comm, tn_mpi_types, tensor_backend)[source]
Receive tensor via MPI.
Arguments
- from_ : integer
MPI process to receive the tensor from.
comm : instance of MPI communicator to be used
- tn_mpi_types : dict
Dictionary mapping dtype to MPI data types.
tensor_backend : instance of TensorBackend
- mpi_send(to_, comm, tn_mpi_types)[source]
Send tensor via MPI.
Arguments
- to_ : integer
MPI process to send the tensor to.
comm : instance of MPI communicator to be used
- tn_mpi_types : dict
Dictionary mapping dtype to MPI data types.
- property ndim
Rank of the tensor.
- norm_sqrt()[source]
Calculate the square root of the norm of the tensor, i.e., sqrt( <tensor|tensor>).
- permute_rows_cols_update(inds)[source]
Permute rows and columns of rank-2 tensor with inds. Inplace update.
- prepare_eig_api(conv_params)[source]
Return variables for eigsh.
Returns
- kwargs : dict
Keyword arguments for the eigs call. If an initial guess can be passed, the key “v0” is set with value None.
LinearOperator : None
eigsh : None
- random_unitary(link)[source]
Generate a random unitary matrix via performing a SVD on a random tensor.
Arguments
- self : instance of QteaTensor
Extract data type etc. from this one here.
- link : same as returned by links property, here an integer.
Dimension of the square, random unitary matrix.
- classmethod read(filehandle, dtype, device, base_tensor_cls, cmplx=True, order='F')[source]
Read a tensor from file via QteaTensor.
- scale_link(link_weights, link_idx, do_inverse=False)[source]
Scale tensor along one link at link_idx with weights.
Arguments
- link_weights : np.ndarray
Scalar weights, e.g., singular values.
- link_idx : int
Link which should be scaled.
- do_inverse : bool, optional
If True, scale with the inverse instead of multiplying with the link weights. Default to False.
Returns
updated_link : instance of QteaTensor
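A minimal sketch contrasting the returning variant with the inplace variant (scale_link_update is documented next):
import numpy as np
import torch
from qredtea.torchapi import QteaTorchTensor

tens = QteaTorchTensor.from_elem_array(torch.ones(2, 4, 2, dtype=torch.complex128))
weights = np.linspace(1.0, 0.25, 4)
scaled = tens.scale_link(weights, 1)                 # returns a new, scaled tensor
tens.scale_link_update(weights, 1, do_inverse=True)  # inplace, divides by the weights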
- scale_link_update(link_weights, link_idx, do_inverse=False)[source]
Scale tensor along one link at link_idx with weights (inplace update).
Arguments
- link_weights : np.ndarray
Scalar weights, e.g., singular values.
- link_idx : int
Link which should be scaled.
- do_inverse : bool, optional
If True, scale with the inverse instead of multiplying with the link weights. Default to False.
- set_diagonal_entry(position, value)[source]
Set the diagonal element in a rank-2 tensor (inplace update)
- set_matrix_entry(idx_row, idx_col, value)[source]
Set one element in a rank-2 tensor (inplace update)
- static set_missing_link(links, max_dim, are_links_outgoing=None)[source]
Calculate the property of a missing link in a list.
Arguments
- links : list
Contains data as returned by the links property, except for one element being None.
- max_dim : int
Maximal dimension of the link allowed by convergence parameters or similar.
- are_links_outgoing : list of bools
Indicates link direction for symmetry tensors only.
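A minimal sketch, assuming the missing entry is filled with the product of the known dimensions capped at max_dim and that the completed list is returned:
from qredtea.torchapi import QteaTorchTensor

links = [4, 5, None]
# Assumed behavior: the None entry becomes min(4 * 5, max_dim) = 16.
full_links = QteaTorchTensor.set_missing_link(links, max_dim=16)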
- set_submatrix(row_range, col_range, tensor)[source]
Set a submatrix of a rank-2 tensor for the given rows / cols.
- set_subtensor_entry(corner_low, corner_high, tensor)[source]
Set a subtensor (potentially expensive as looping explicitly, inplace update).
Arguments
- corner_low : list of ints
The lower index of each dimension of the tensor to set. Length must match the rank of tensor self.
- corner_high : list of ints
The higher index of each dimension of the tensor to set. Length must match the rank of tensor self.
- tensor : QteaTorchTensor
Tensor to be set as a subtensor. Rank must match tensor self. Dimensions must match corner_high - corner_low.
Examples
To set a tensor of shape 2x2x2 in a larger tensor self of shape 8x8x8, the corresponding call is, in comparison to the numpy syntax:
self.set_subtensor_entry([2, 4, 2], [4, 6, 4], tensor)
self[2:4, 4:6, 2:4] = tensor
To be able to work with all ranks, we currently avoid the numpy syntax in our implementation.
- property shape
Dimension of tensor along each dimension.
- split_qr(legs_left, legs_right, perm_left=None, perm_right=None, is_q_link_outgoing=True)[source]
Split the tensor via a QR decomposition.
Parameters
- self : instance of QteaTensor
Tensor upon which to apply the QR.
- legs_left : list of int
Legs that will compose the rows of the matrix.
- legs_right : list of int
Legs that will compose the columns of the matrix.
- perm_left : list of int, optional
Permutation of legs after the QR on the left tensor.
- perm_right : list of int, optional
Permutation of legs after the QR on the right tensor.
Returns
- tens_left : instance of QteaTensor
Unitary tensor after the QR, i.e., Q.
- tens_right : instance of QteaTensor
Upper triangular tensor after the QR, i.e., R.
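A minimal sketch splitting a rank-3 tensor, with legs (0, 1) as rows and leg 2 as columns:
import torch
from qredtea.torchapi import QteaTorchTensor

tens = QteaTorchTensor.from_elem_array(torch.randn(2, 3, 4, dtype=torch.complex128))
tens_q, tens_r = tens.split_qr([0, 1], [2])
# tens_q keeps the legs of dimension (2, 3) plus the new bond; tens_r carries the bond and the leg of dimension 4.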
- split_qrte(tens_right, singvals_self, operator=None, conv_params=None, is_q_link_outgoing=True)[source]
Perform a Truncated Expanded QR decomposition, generalizing the idea of https://arxiv.org/pdf/2212.09782.pdf to a general bond expansion, given the isometry center of the network on tens_left. It should be rather general for three-leg tensors, and thus applicable to any tensor network ansatz. Notice, however, that you do not have full control over the approximation, since you only know a subset of the truncated singular values.
Parameters
- tens_left : xp.array
Left tensor.
- tens_right : xp.array
Right tensor.
- singvals_left : xp.array
Array of singular values on the link to the left of tens_left.
- operator : xp.array or None
Operator to contract with the tensors. If None, no operator is contracted.
Returns
- tens_left : ndarray
Left tensor after the EQR.
- tens_right : ndarray
Right tensor after the EQR.
- singvals : ndarray
Singular values kept after the EQR.
- singvals_cutted : ndarray
Subset of the singular values cut after the EQR, normalized with the largest singular value.
- split_svd(legs_left, legs_right, perm_left=None, perm_right=None, contract_singvals='N', conv_params=None, no_truncation=False, is_link_outgoing_left=True)[source]
Perform a truncated Singular Value Decomposition by first reshaping the tensor into a legs_left x legs_right matrix, and permuting the legs of the output tensors if needed. If contract_singvals is ‘L’ or ‘R’, it takes care of renormalizing the output tensors such that the norm of the MPS remains 1 even after a truncation.
Parameters
- self : instance of QteaTensor
Tensor upon which to apply the SVD.
- legs_left : list of int
Legs that will compose the rows of the matrix.
- legs_right : list of int
Legs that will compose the columns of the matrix.
- perm_left : list of int, optional
Permutation of legs after the SVD on the left tensor.
- perm_right : list of int, optional
Permutation of legs after the SVD on the right tensor.
- contract_singvals : string, optional
How to contract the singular values. ‘N’: no contraction; ‘L’: to the left tensor; ‘R’: to the right tensor.
- conv_params : TNConvergenceParameters, optional
Convergence parameters to use in the procedure. If None is given, use the default convergence parameters of the TN. Default to None.
- no_truncation : boolean, optional
Allow running without truncation. Default to False (hence truncating by default).
Returns
- tens_left : instance of QteaTensor
Left tensor after the SVD.
- tens_right : instance of QteaTensor
Right tensor after the SVD.
- singvals : xp.ndarray
Singular values kept after the SVD.
- singvals_cut : xp.ndarray
Singular values cut after the SVD, normalized with the largest singular value.
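A minimal sketch of a truncated split, assuming TNConvergenceParameters is importable from qtealeaves.convergence_parameters and accepts a max_bond_dimension argument:
import torch
from qtealeaves.convergence_parameters import TNConvergenceParameters
from qredtea.torchapi import QteaTorchTensor

conv = TNConvergenceParameters(max_bond_dimension=8)
tens = QteaTorchTensor.from_elem_array(torch.randn(2, 3, 4, dtype=torch.complex128))
# Rows from leg 0, columns from legs (1, 2); singular values contracted into the right tensor.
t_left, t_right, singvals, singvals_cut = tens.split_svd(
    [0], [1, 2], contract_singvals="R", conv_params=conv
)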
- stack_link(other, link)[source]
Stack two tensors along a given link.
Arguments
- other : instance of QteaTorchTensor
Links must match self up to the specified link.
- link : integer
Stack along this link.
Returns
new_this : instance of QteaTorchTensor
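A minimal sketch; all legs except the stacked one must have matching dimensions:
import torch
from qredtea.torchapi import QteaTorchTensor

t_a = QteaTorchTensor.from_elem_array(torch.zeros(2, 3, 5, dtype=torch.complex128))
t_b = QteaTorchTensor.from_elem_array(torch.zeros(2, 4, 5, dtype=torch.complex128))
t_ab = t_a.stack_link(t_b, 1)   # expected shape (2, 7, 5)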
- subtensor_along_link(link, lower, upper)[source]
Extract and return a subtensor, selecting the range (lower, upper) along one link.
- to_dense(true_copy=False)[source]
Return dense tensor (if true_copy=False, same object may be returned).
- to_dense_singvals(s_vals, true_copy=False)[source]
Convert singular values to dense vector without symmetries.
- vector_with_dim_like(dim, dtype=None)[source]
Generate a vector in the native array of the base tensor.
- class qredtea.torchapi.qteatorchtensor.DataMoverPytorch[source]
Data mover to move QteaTorchTensor between devices, e.g., CPU and GPU.
- async_move(tensor, device)[source]
Move the tensor tensor to the device device asynchronously with respect to the main computational stream
Parameters
- tensor : _AbstractTensor
The tensor to be moved.
- device : str
The device where to move the tensor.
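A minimal sketch, assuming the data mover is constructed without arguments and that “gpu” requires a CUDA-enabled torch installation:
import torch
from qredtea.torchapi import QteaTorchTensor
from qredtea.torchapi.qteatorchtensor import DataMoverPytorch

mover = DataMoverPytorch()
tens = QteaTorchTensor.from_elem_array(torch.zeros(2, 2, dtype=torch.complex128))
mover.async_move(tens, "gpu")   # move to the GPU asynchronously; device convention as above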
- property device_memory
Current memory occupied in the device